A model for digital preservation repository risk relationships
The paper introduces the Preserved Object and Repository Risk Ontology (PORRO), a model that relates preservation functionality with associated risks and opportunities for their mitigation. Building on work undertaken in a range of EU and UK funded research projects (including the Digital Curation Centre, DigitalPreservationEurope and DELOS), this ontology illustrates relationships between fundamental digital library goals and their parameters; associated rights and responsibilities; practical activities and resources involved in their accomplishment; and risks facing digital libraries and their collections. Its purpose is to facilitate a comprehensive understanding of risk causality and to illustrate opportunities for mitigation and avoidance.
The ontology reflects evidence accumulated from a series of institutional audits and evaluations, including a specific subset of digital libraries in the DELOS project which led to the definition of a digital library preservation risk profile. Its applicability is intended to be widespread, and its coverage expected to evolve to reflect developments within the community.
Attendees will gain an understanding of the model and learn how they can utilize this online resource to inform their own risk management activities.
In pursuit of an expressive vocabulary for preserved new media art
The nature of new media, interactive and performance art complicates our ability to follow conventional preservation approaches. Documentation of digital art materials has been determined to be an appropriate means of resolving the associated difficulties, but this demands high levels of expressiveness to support the encapsulation of the myriad elements and qualities of content and context that may influence value and reproducibility. We discuss a proposed Vocabulary for Preserved New Media Works, a means of encapsulating the various information and material dimensions implicit within a work and required to ensure its ongoing availability.
Reflections on preserving the state of new media art
As part of its work to explore emerging issues associated with characterisation of digital materials, Planets has explored vocabularies and information structures for expressing the properties integral to the value of digital art. Value encompasses those qualities that must be understood and captured in order to ensure that art works' sensory, emotional, mental and spiritual resonance remains. Facets of interactivity, modularity and temporality associated with digital art present critical questions that the preservation community must increasingly be equipped to answer. Because digital art materials exhibit fundamental multidimensionality, validating the successful preservation of creative experience demands the explication of more than just file characteristics. Understanding relationships between objects also implies an understanding of their respective functional qualities. This paper presents a Planets vocabulary for encapsulating the contextual and implicit characteristics of digital art, optimised for preservation planning and validation.
Digital curation and the cloud
Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. Associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. Both risks and benefits vary considerably according to the service and deployment models being adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative models, e.g., private clouds, but this is typically at the expense of benefits such as resource elasticity and economies of scale. An Infrastructure as a Service model may provide a basis on which more specialised software services may be provided.
There is considerable work to be done in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies may be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments; the exploration of opportunities for certifying or accrediting cloud preservation providers; and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.
Engage - Using Data About Research Clusters to Enhance Collaboration
This project explored different classifications of research, and ideas for implementing these in university systems, in order to publicise research.
Entropy scaling based viscosity predictions for hydrocarbon mixtures and diesel fuels up to extreme conditions
An entropy scaling based technique using the Perturbed-Chain Statistical Associating Fluid Theory is described for predicting the viscosity of hydrocarbon mixtures and diesel fuels up to high temperatures and high pressures. The compounds found in diesel fuels or hydrocarbon mixtures are represented as a single pseudo-component. The model is not fit to viscosity data but is predictive up to high temperatures and pressures with input of only two calculated or measured mixture properties: the number averaged molecular weight and hydrogen to carbon ratio. Viscosity is predicted less accurately when the mixture contains high concentrations of iso-alkanes and cyclohexanes. However, it is shown that predictions for these mixtures are improved by fitting a third parameter to a single viscosity data point at a chosen reference state. For hydrocarbon mixtures, viscosity is predicted with average mean absolute percent deviations (MAPDs) of 12.2% using the two-parameter model and 7.3% using the three-parameter model from 293 to 353 K and up to 1000 bar. For two different diesel fuels, viscosity is predicted with an average MAPD of 21.4% using the two-parameter model and 9.4% using the three-parameter model from 323 to 423 K and up to 3500 bar.
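The error metric quoted here, mean absolute percent deviation, can be sketched as follows (a minimal illustration; the viscosity values below are invented, not data from the study):

```python
def mapd(predicted, measured):
    """MAPD = (100 / N) * sum(|pred - meas| / |meas|), in percent."""
    assert len(predicted) == len(measured) and measured
    return 100.0 * sum(
        abs(p - m) / abs(m) for p, m in zip(predicted, measured)
    ) / len(measured)

# Illustrative (made-up) viscosities in mPa*s:
predicted = [2.10, 3.45, 5.80]
measured = [2.00, 3.60, 6.10]
print(round(mapd(predicted, measured), 2))  # 4.69
```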
Modelling of Diesel fuel properties through its surrogates using Perturbed-Chain, Statistical Associating Fluid Theory
The Perturbed-Chain, Statistical Associating Fluid Theory equation of state is utilised to model the effect of pressure and temperature on the density, volatility and viscosity of four Diesel surrogates; these calculated properties are then compared to the properties of several Diesel fuels. Perturbed-Chain, Statistical Associating Fluid Theory calculations are performed using different sources for the pure component parameters. One source utilises literature values obtained from fitting vapour pressure and saturated liquid density data or from correlations based on these parameters. The second source utilises a group contribution method based on the chemical structure of each compound. Both modelling methods deliver similar estimations for surrogate density and volatility that are in close agreement with experimental results obtained at ambient pressure. Surrogate viscosity is calculated using the entropy scaling model with a new mixing rule for calculating mixture model parameters. The closest match of the surrogates to Diesel fuel properties provides mean deviations of 1.7% in density, 2.9% in volatility and 8.3% in viscosity. The Perturbed-Chain, Statistical Associating Fluid Theory results are compared to calculations using the Peng–Robinson equation of state; the superior performance of the Perturbed-Chain, Statistical Associating Fluid Theory approach for calculating fluid properties is demonstrated. Finally, an eight-component surrogate, with properties at high pressure and temperature predicted with the group contribution Perturbed-Chain, Statistical Associating Fluid Theory method, yields the best match for Diesel properties with a combined mean absolute deviation of 7.1% from experimental data found in the literature for conditions up to 373 K and 500 MPa. These results demonstrate the predictive capability of a state-of-the-art equation of state for Diesel fuels at extreme engine operating conditions.
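A surrogate blend like those above reduces to the mixture-averaged inputs named in the preceding entropy-scaling abstract: the number-averaged molecular weight and the hydrogen-to-carbon ratio. The sketch below computes both for a made-up equimolar two-component blend; the component choice and mole fractions are assumptions for illustration, not the paper's surrogates.

```python
def mixture_inputs(components):
    """Number-averaged molecular weight (g/mol) and H/C ratio.

    components: list of (mole_fraction, mol_weight, n_H, n_C) tuples.
    """
    mw = sum(x * m for x, m, _, _ in components)
    h = sum(x * nh for x, _, nh, _ in components)
    c = sum(x * nc for x, _, _, nc in components)
    return mw, h / c

# Illustrative blend: n-hexadecane (C16H34) and toluene (C7H8), equimolar.
mw, hc = mixture_inputs([(0.5, 226.45, 34, 16), (0.5, 92.14, 8, 7)])
print(f"{mw:.1f} g/mol, H/C = {hc:.2f}")
```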
High-Temperature, High-Pressure Viscosities and Densities of n-Hexadecane, 2,2,4,4,6,8,8-Heptamethylnonane, and Squalane Measured Using a Universal Calibration for a Rolling-Ball Viscometer/Densimeter
The development of reference correlations for viscous fluids is predicated on the availability of accurate viscosity data, especially at high-pressure, high-temperature (HPHT) conditions. The rolling-ball viscometer (RBV) is a facile technique for obtaining such HPHT viscosity data. A new, universal RBV calibration methodology is described and applied over a broad T-p region and for a wide range of viscosities. The new calibration equation is used to obtain viscosities for n-hexadecane (HXD), 2,2,4,4,6,8,8-heptamethylnonane (HMN), and 2,6,10,15,19,23-hexamethyltetracosane (squalane) from 298 to 530 K and at pressures to 250 MPa. The available literature database for HMN is expanded to 520 K and 175 MPa, and for squalane to 525 K and 250 MPa. The combined expanded uncertainties are 0.6% and 2.5% for the densities and viscosities, respectively, each with a coverage factor k = 2. The reliability of the viscosity data is validated by comparison of HXD and squalane viscosities to accepted reference correlations and of HMN viscosities to available literature data. The necessity of this new calibration approach is confirmed by the large deviations observed between HXD, HMN, and squalane viscosities determined using the new, universal RBV calibration equation and viscosities determined using a quadratic polynomial calibration equation. HXD, HMN, and squalane densities are predicted with the Perturbed-Chain Statistical Associating Fluid Theory using pure component parameters calculated with a previously reported group contribution (GC) method. HXD, HMN, and squalane viscosities are compared to Free Volume Theory (FVT) predictions using FVT parameters calculated from a literature correlation for n-alkanes. Although the FVT predictions for HXD, a normal alkane, result in an average absolute percent deviation (ΔAAD) of 3.8%, deviations for HMN and squalane, two branched alkanes, are four to 13 times larger. The fit of the FVT model for the branched alkanes is dramatically improved if the FVT parameters are allowed to vary with temperature.
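The rolling-ball measurement principle referred to above can be sketched with its classical working equation, eta = K * (rho_ball - rho_fluid) * t; the calibration constant, densities, and roll time below are illustrative placeholders, not the universal calibration developed in this work:

```python
def rolling_ball_viscosity(k_cal, rho_ball, rho_fluid, roll_time):
    """Classical rolling-ball relation: dynamic viscosity in mPa*s.

    k_cal     -- calibration constant, mPa*s / ((g/cm^3) * s) [assumed]
    rho_ball  -- ball density, g/cm^3
    rho_fluid -- fluid density, g/cm^3
    roll_time -- measured roll time, s
    """
    return k_cal * (rho_ball - rho_fluid) * roll_time

# Made-up numbers: a steel ball rolling through a light hydrocarbon.
eta = rolling_ball_viscosity(k_cal=0.05, rho_ball=7.8,
                             rho_fluid=0.77, roll_time=12.0)
print(round(eta, 3))  # 0.05 * 7.03 * 12.0 = 4.218
```

The paper's point is that a single quadratic polynomial in place of K is inadequate over a broad T-p region, hence the universal calibration.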
Bringing self assessment home: repository profiling and key lines of enquiry within DRAMBORA
Digital repositories are a manifestation of complex organizational, financial, legal, technological, procedural, and political interrelationships. Accompanying each of these are innate uncertainties, exacerbated by the relative immaturity of understanding prevalent within the digital preservation domain. Recent efforts have sought to identify core characteristics that must be demonstrable by successful digital repositories, expressed in the form of check-list documents, intended to support the processes of repository accreditation and certification. In isolation though, the available guidelines lack practical applicability; confusion over evidential requirements and difficulties associated with the diversity that exists among repositories (in terms of mandate, available resources, supported content and legal context) are particularly problematic. A gap exists between the available criteria and the ways and extent to which conformity can be demonstrated. The Digital Repository Audit Method Based on Risk Assessment (DRAMBORA) is a methodology for undertaking repository self assessment, developed jointly by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE). DRAMBORA requires repositories to expose their organization, policies and infrastructures to rigorous scrutiny through a series of highly structured exercises, enabling them to build a comprehensive registry of their most pertinent risks, arranged into a structure that facilitates effective management. It draws on experiences accumulated throughout 18 evaluative pilot assessments undertaken in an internationally diverse selection of repositories, digital libraries and data centres (including institutions and services such as the UK National Digital Archive of Datasets, the National Archives of Scotland, Gallica at the National Library of France and the CERN Document Server). 
Other organizations, such as the British Library, have been using sections of DRAMBORA within their own risk assessment procedures.
Despite the attractive benefits of a bottom-up approach, there are implicit challenges posed by neglecting a more objective perspective. Following a sustained period of pilot audits undertaken by DPE, DCC and the DELOS Digital Preservation Cluster aimed at evaluating DRAMBORA, it was acknowledged that, had project members not been present to facilitate each assessment and contribute their objective, external perspectives, the results might have been less useful. Consequently, DRAMBORA has developed in a number of ways: to enable knowledge transfer from the responses of comparable repositories, and to incorporate more opportunities for structured question sets, or key lines of enquiry, that provoke more comprehensive awareness of the applicability of particular threats and opportunities.
Satellite versus ground-based estimates of burned area: a comparison between MODIS based burned area and fire agency reports over North America in 2007
North American wildfire management teams routinely assess burned area on site during firefighting campaigns; meanwhile, satellite observations provide systematic and global burned-area data. Here we compare satellite and ground-based daily burned area for selected large wildfire events across North America in 2007. In a sample of 26 fires, we found the Global Fire Emissions Database Version 4 (GFED4) estimated about 80% of the burned area logged in ground-based Incident Status Summary (ICS-209) reports over 8-day analysis windows. Linear regression analysis found a slope between GFED4 and ICS-209 of 0.67 (with R = 0.96). The agreement between these data sets degrades at shorter timescales (from R = 0.81 for 4-day to R = 0.55 for 2-day windows). Furthermore, on large burning days (> 3000 ha), GFED4 typically estimates half of the burned area logged in the ICS-209 reports.
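The regression comparison described above amounts to an ordinary least-squares fit through paired satellite and ground-based burned-area estimates; the sketch below computes the slope and Pearson correlation with invented numbers, not the study's 26 fires:

```python
def slope_and_r(x, y):
    """OLS slope of y on x and Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5

# Illustrative (made-up) paired burned-area estimates, in hectares:
ics209 = [500.0, 1200.0, 3000.0, 8000.0]   # ground-based reports
gfed4 = [420.0, 1000.0, 2300.0, 6500.0]    # satellite product
slope, r = slope_and_r(ics209, gfed4)
print(f"slope = {slope:.2f}, R = {r:.3f}")
```

A slope below 1 with high R, as in the study, indicates the satellite product systematically underestimates the ground reports while tracking them closely.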